searle[s86,jmc] Searle's Detours
Searle's "Turing the Chinese Room" is concisely and
crisply written. It is also a pleasure to attempt to
imitate its vigorous way of dealing with people whose
opinions differ from those of the author.
For all its crispness and vigor, it is not entirely unmeaningless.
The trouble is that the crisply and tersely written "definitions" and
"propositions" have no clear meaning in any branch of knowledge.
For brevity I shall suppose that the reader has Searle's
paper at hand.
Definition 1 of "Strong AI". "... the ... digital computer ...
would thereby have a mind in exactly the same sense that human
beings have minds". The trouble is that "exactly the same"
sense isn't a clear concept. Searle can and does put whatever
he pleases into that "exactly the same sense". In my "Ascribing
Mental Qualities to Machines", the paper in which I justify
ascribing simple beliefs to devices as simple as thermostats, I
state the sense in which I take mental qualities.
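As a toy illustration of that sense (the code below is only an illustration I am supplying for this note, not anything taken from that paper), consider a thermostat whose internal state can be read off as one of three beliefs about the room: too cold, too hot, or okay, and whose single standing desire is to keep the temperature near its setpoint.

    # A toy thermostat in Python.  Its only "mental quality" is a belief
    # about the room, and that belief is simply a description of its state.
    class Thermostat:
        def __init__(self, setpoint, tolerance=1.0):
            self.setpoint = setpoint      # the desired temperature
            self.tolerance = tolerance    # dead band around the setpoint

        def belief(self, reading):
            # The ascription: the thermostat "believes" the room is too
            # cold, too hot, or okay.
            if reading < self.setpoint - self.tolerance:
                return "too cold"
            if reading > self.setpoint + self.tolerance:
                return "too hot"
            return "okay"

        def act(self, reading):
            # Its action is explained by the ascribed belief together with
            # its one desire: keep the room near the setpoint.
            return {"too cold": "turn heat on",
                    "too hot": "turn heat off",
                    "okay": "do nothing"}[self.belief(reading)]

    t = Thermostat(setpoint=20.0)
    print(t.belief(17.0), "->", t.act(17.0))   # prints: too cold -> turn heat on

Nothing hinges on the details; the point is that ascribing this small belief structure is useful for explaining what the device does, and that is the sense I have in mind.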
"Proposition 1: Programs are purely formal (i.e. syntactical).
I take it this proposition needs no explanation for the readers
of this journal". I don't know what journal is referred to, but
it is a bluff to say that the proposition needs no explanation.
Program texts are syntactical. However, there is a well developed
theory of the semantics of programming languages in which
programs are given semantics in essentially the same way as sentences
of logic are given semantics - the prescription of domains and
predicates and functions operating on these domains.
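To make this concrete, here is a small sketch (an illustration of my own for this note, not a quotation from the semantics literature) of how a trivial expression language is given a Tarski-style semantics: the syntax is just a class of trees, and the semantics is a meaning function taking an expression and an interpretation of its variables into a domain, here the integers.

    # Syntax: expressions are nested tuples, e.g.
    #   ("plus", ("var", "x"), ("const", 3))
    # Semantics: a meaning function from an expression and an interpretation
    # (an assignment of domain elements to variables) into the domain.
    def meaning(expr, interpretation):
        tag = expr[0]
        if tag == "const":
            return expr[1]
        if tag == "var":
            return interpretation[expr[1]]
        if tag == "plus":
            return meaning(expr[1], interpretation) + meaning(expr[2], interpretation)
        if tag == "times":
            return meaning(expr[1], interpretation) * meaning(expr[2], interpretation)
        raise ValueError("not a well-formed expression")

    e = ("plus", ("var", "x"), ("times", ("const", 3), ("var", "y")))
    print(meaning(e, {"x": 2, "y": 5}))   # prints 17

The strings (or trees) by themselves are purely syntactical; choosing a domain and a meaning function is the additional step that gives them semantics.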
"Proposition 2. Syntax is neither equivalent to nor sufficient by
itself for semantics." This can be taken as a reference to
the above fact that assigning semantics to a logical or programming
language is a step beyond merely prescribing the strings of symbols
that constitute wffs of the language. Perhaps it is also a reference
to incompleteness theorems. The sense in which it is taken is
relevant to what further conclusions can be drawn from it.
"Proposition 3. Minds have mental contents (i.e. semantic contents)"
What kind of entity a mind is supposed to be we aren't told. I
suppose, since references aren't given, that a common sense meaning
is implied. I certainly prefer interpretations in which minds
have states about which it can sometimes be said that in that state a mind
has certain beliefs and desires. For reasons discussed in
my "Ascribing ..." paper, I think it most fruitful scientifically
to treat mental qualities in a fragmentary way, i.e. some quite simple
systems can be ascribed some mental qualities. What qualities
a system would require before Searle would admit it as a mind
is unclear, but an unpleasant fellow like me might worry that
he would pack so many in that no possible system would have
all of them. If he defines mind he should prove that minds
exist satisfying his definitions.
"Conclusion 1 : Having a program -- any program by itself --
is neither sufficient for nor equivalent to having a mind"
While the meaning of this isn't clear, taken in one attractive
sense, it seems to be false. The argument is based on
the idea that programs don't have semantics; but programming
languages are often given interpretations, i.e. semantics
in the Tarski sense, and so programmed systems can have minds
in exactly the same sense in which programs can have semantics.
Without precise definitions we can't go further.
While I think it scientifically fruitful to ascribe mental
qualities to certain machines in the same sense in which it
is fruitful to ascribe semantics to programs, I agree that
humans have much less trivial sets of mental qualities than
can be ascribed to even the most sophisticated of today's
programs. It is an important scientific problem to identify
what these qualities are. (I say scientific, because I have
just about abandoned hope that philosophers can be persuaded
to take the problems in a sufficiently precise and detailed
scientific sense to allow them to make scientific contributions to the
problem. For example, no philosopher seems to be willing even
to try to disentangle the components of consciousness, including
the self as a physical object, the ways of comparing oneself
with other people so as to learn more about oneself and about
other people, observations that a person can make of his own state
of mind, and how the ability of a computer to read both its own
manual of operations and its own program fits in.)
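As a trivial example of that last ability (again something I am supplying here only as an illustration), a program can open the file it was loaded from and report a fact about its own text, treating its own program as just another object it can examine.

    # A program that makes a modest observation about itself: it reads
    # the file it was loaded from and counts its own lines.
    def self_observation():
        with open(__file__) as f:
            my_text = f.read()
        return "I am a program of %d lines." % len(my_text.splitlines())

    if __name__ == "__main__":
        print(self_observation())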
Remark on the Chinese room: In the present paper, the Chinese
room is almost irrelevant, but I will repeat a remark I made
in my BBS commentary on Searle's paper. Consider the very
strong form of the puzzle in which a person has a complete
set of rules for handling the Chinese texts in his mind, so
by following the rules mentally he conducts the dialog. He
indeed may not realize he is conducting a Chinese dialog.
However, the system still may be ascribed a knowledge of
Chinese. We can even imagine that our hero began with rules
for simulating a Chinese one year old, and the rules included
the possibility of self-modification so that the dialogs became
more sophisticated via a Chinese conventional education.
We have here a situation that doesn't actually occur with people
but which occurs with computers all the time; namely, a program
in machine language interprets another language. Machines even
time-share among different interpreters, which would correspond
to our hero conducting three independent Chinese dialogs
and one in German. In this case, different knowledge and
desires should be ascribed to each program being interpreted.
With normal humans, however, only one program is being executed,
and it doesn't cause confusion to ascribe the mental qualities to
the person as a physical object.
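To make the analogy concrete, here is a minimal sketch (the rule books are trivial stand-ins I have invented for this note) of one host program time-sharing among several interpreted dialog programs, each with its own rules and its own memory; whatever knowledge there is belongs to the interpreted program, not to the host loop that runs it.

    # One host time-sharing among several interpreted "dialog programs".
    # Each has its own rule book and its own memory; the host loop knows
    # nothing of any of the languages involved.
    class Dialog:
        def __init__(self, name, rules):
            self.name = name
            self.rules = rules     # the interpreted program's rule book
            self.memory = []       # its own state, not the host's

        def step(self, utterance):
            self.memory.append(utterance)
            # A real rule book would be enormously larger, but the
            # structure -- look up the input, produce a reply -- is the same.
            return self.rules.get(utterance, "...")

    # Three Chinese dialogs and one German dialog, each a separate
    # interpreted program with separate "knowledge".
    dialogs = [
        Dialog("chinese-1", {"ni hao": "ni hao"}),
        Dialog("chinese-2", {"xiexie": "bu keqi"}),
        Dialog("chinese-3", {"zaijian": "zaijian"}),
        Dialog("german",    {"guten Tag": "guten Tag"}),
    ]

    # The host interleaves them exactly as a time-sharing system
    # interleaves interpreters.
    for d, u in zip(dialogs, ["ni hao", "xiexie", "zaijian", "guten Tag"]):
        print(d.name, ":", u, "->", d.step(u))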
Searle then goes on to use the word "cause". There I
can't follow him, since I have read only a little of the
philosophical literature on "causality" and can't guess which
accepted sense, if any, he is using. It is obviously not the common
sense usage of "Throwing the baseball through the window caused
it to break". Naturally I suspect the worst, and here's where
I imagine the legerdemain lies.
"But the actual powers of the brain by which it causes mental states have to
do with specific neurobiological features, specific electrochemical
properties of neurons, synapses, synaptic clefts, neurotransmitters,
boutons, nodules and all the rest of it. We summarize this brute
empirical fact about how nature works as:
Proposition 4. Brains cause minds."
The claim that this ascription is an empirical fact is, as the
philosophers like to say, confused - a category error or something
like that. Why can't I just as well say "computers cause
the semantics of programs"? I won't say it, because I don't
want to pretend to understand Searle's usage of "cause".
Now I'll skip a chain of assertions about what
causes what and go to a rather typical Searle pomposity,
namely,
"But, once again, does anyone in AI really question it? Is
there someone in AI so totally innocent of biological
knowledge that he thinks that the specific biochemical powers
of human nervous systems can be duplicated in silicon chips
(transistors, vacuum tubes -- you name it)?"
This seems to assume that Searle has already proved that the biochemistry
is relevant to what mental qualities a system may have. But perhaps not.
Who knows what surprises may lurk in the phrase "biochemical powers"?
Maybe, as Searle suggests earlier, we've got him wrong. Perhaps the
phrase doesn't presuppose anything about the system being built up from
DNA, proteins and cells.
Finally, we have
"It can no longer be doubted that the classical concep-
tion of AI, the view that I have called strong AI, is pretty
much obviously false and rests on very simple mistakes. The
question then arises, if strong AI is false what ought AI to
be doing ? What is a reasonable research project for weak
AI? That is a topic for another paper."
Frankly, I think this is a bluff. Distinguishing
"weak AI" is a rhetorical device that deflects attempts to
demand empirical consequences of the denial of "strong AI".
At the NY Academy symposium, I predicted that someday
Searle's secretary would tell him something like, "The computer believes
that you haven't accounted for your travel advance. How will
you correct her?" If I recall correctly, he said that this
usage on her part would be ok, but it was just weak AI. This
left me entirely confused about what Searle had been asserting.
Enough years have elapsed since the original Chinese room paper
so that we can reasonably ask Searle for at least a few hundred
words on what the "electronic powers" of weak AI programmed
into a computer might be.